
    SecureCyclon: Dependable Peer Sampling

    Overlay management is the cornerstone of building robust and dependable Peer-to-Peer systems. A key component for building such overlays is the peer-sampling service, a mechanism that continuously supplies each node with a set of up-to-date peers randomly selected across all alive nodes. Arguably, the most pernicious malicious action against such mechanisms is the provision of arbitrarily created links that point at malicious nodes. This paper proposes SecureCyclon, a peer-sampling protocol that deterministically eliminates the ability of malicious nodes to overrepresent themselves in Peer-to-Peer overlays. To the best of our knowledge, this is the first protocol to offer this property, as previous works were only able to bound the proportion of excessive links to malicious nodes, without completely eliminating them. SecureCyclon redefines node descriptors from mere containers of information that enable communication with specific nodes to communication certificates that traverse the network and enable nodes to provably discover malicious nodes. We evaluate our solution through extensive simulations and demonstrate that it remains resilient even under the extreme condition of 40% malicious node participation. Comment: 12 pages, 7 figures, ICDCS 202
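    The descriptor-as-certificate idea can be sketched as follows. This is a minimal illustration, not the paper's protocol: it uses a shared-key HMAC as a stand-in for the per-node signatures a real deployment would need, and the names `make_descriptor` and `verify_descriptor` are hypothetical. The point is only that a receiving node can check a descriptor's integrity rather than trusting whoever forwarded it.

```python
import hmac
import hashlib

def make_descriptor(node_id: str, age: int, key: bytes) -> dict:
    # Sign the descriptor's contents so later holders can verify
    # that it was not fabricated or altered in transit.
    payload = f"{node_id}:{age}".encode()
    sig = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {"node_id": node_id, "age": age, "sig": sig}

def verify_descriptor(d: dict, key: bytes) -> bool:
    # Recompute the signature and compare in constant time;
    # a tampered descriptor (e.g. a forged link) fails this check.
    payload = f"{d['node_id']}:{d['age']}".encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(d["sig"], expected)
```

    A node would run `verify_descriptor` before admitting an incoming descriptor to its view, discarding (and potentially reporting) any that fail.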

    GDCluster: a general decentralized clustering algorithm

    In many popular applications like peer-to-peer systems, large amounts of data are distributed among multiple sources. Analyzing this data and identifying clusters is challenging due to processing, storage, and transmission costs. In this paper, we propose GDCluster, a general fully decentralized clustering method capable of clustering dynamic and distributed data sets. Nodes continuously cooperate through decentralized gossip-based communication to maintain summarized views of the data set. We customize GDCluster for execution of partition-based and density-based clustering methods on the summarized views, and also offer enhancements to the basic algorithm. Coping with dynamic data is made possible by gradually adapting the clustering model. Our experimental evaluations show that GDCluster discovers clusters efficiently with scalable transmission cost, and also demonstrate its superiority over the popular LSP2P method.
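    The idea of clustering a summarized view rather than the raw data can be illustrated with a toy weighted k-means over one-dimensional summary points. This is an assumption-laden sketch, not GDCluster itself: the `(value, weight)` summary layout and the function name `weighted_kmeans_1d` are invented for illustration, standing in for the summarized views each node maintains via gossip.

```python
def weighted_kmeans_1d(summary, k, iters=20):
    # summary: list of (value, weight) pairs, i.e. a compact view of
    # the full data set; returns k centroids fitted to that view.
    centroids = [v for v, _ in summary[:k]]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        # Assign each summary point to its nearest centroid.
        for v, w in summary:
            i = min(range(k), key=lambda j: abs(v - centroids[j]))
            groups[i].append((v, w))
        # Recompute centroids as weight-averaged group centers.
        for i, g in enumerate(groups):
            if g:
                total = sum(w for _, w in g)
                centroids[i] = sum(v * w for v, w in g) / total
    return sorted(centroids)
```

    Because the input is a bounded-size summary instead of the raw distributed data, each node can run this locally after gossip rounds, which is what keeps the transmission cost scalable.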

    PULP: an Adaptive Gossip-Based Dissemination Protocol for Multi-Source Message Streams

    Gossip-based protocols provide a simple, scalable, and robust way to disseminate messages in large-scale systems. In such protocols, messages are spread in an epidemic manner. Gossiping may take place between nodes using push, pull, or a combination of both. Push-based systems achieve reasonable latency and high resilience to failures but may impose an unnecessarily large redundancy and overhead on the system. At the other extreme, pull-based protocols impose a lower overhead on the network at the price of increased latencies. A few hybrid approaches have been proposed, typically pushing control messages and pulling data, to avoid the redundancy of high-volume content and single-source streams. Yet, to the best of our knowledge, no other system intermingles push and pull in a multiple-senders scenario in such a way that data messages of one help in carrying control messages of the other and in adaptively adjusting its rate of operation, further reducing overall cost and improving both delays and robustness. In this paper, we propose an efficient generic push-pull dissemination protocol, Pulp, which combines the best of both worlds. Pulp exploits the efficiency of push approaches while limiting redundant messages, and therefore imposes a low overhead, as pull protocols do. Pulp leverages the dissemination of multiple messages from diverse sources: by exploiting the push phase of messages to transmit information about other disseminations, Pulp enables an efficient pulling of other messages, which themselves help in turn with the dissemination of pending messages. We deployed Pulp on a cluster and on PlanetLab. Our results demonstrate that Pulp achieves an appealing trade-off between coverage, message redundancy, and propagation delay. © 2011 Springer Science+Business Media, LLC
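    The core mechanism, pushed data messages piggybacking digests of other known messages so that receivers can pull what they miss, can be sketched as follows. This is a minimal illustration under invented names (`PulpNode`, `on_push`); the real protocol additionally adapts its pull rate, which is omitted here.

```python
class PulpNode:
    def __init__(self, nid):
        self.nid = nid
        self.store = {}       # msg_id -> payload, messages held locally
        self.missing = set()  # ids heard of via digests but not yet received

    def push(self, peer, msg_id):
        # Push one payload, piggybacking a digest of all locally
        # known message ids (the "control" information).
        peer.on_push(msg_id, self.store[msg_id], set(self.store))

    def on_push(self, msg_id, payload, known_ids):
        # Store the pushed message; record any ids we learn about
        # but do not yet hold, so a later pull can fetch them.
        self.store[msg_id] = payload
        self.missing |= known_ids - set(self.store)
        self.missing.discard(msg_id)

    def pull(self, peer):
        # Pull phase: request missing messages the peer holds.
        for mid in list(self.missing):
            if mid in peer.store:
                self.store[mid] = peer.store[mid]
                self.missing.discard(mid)
```

    A single push thus seeds the receiver's "missing" set cheaply, and the subsequent pull retrieves only messages known to exist, avoiding the blind redundancy of pure push.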

    BooSTER: Broadcast Stream Transmission Epidemic Repair

    Wireless broadcasting systems, such as Digital Video Broadcasting (DVB), are subject to signal degradation, which affects end users' reception quality. Reception quality can be improved by increasing signal strength, but this comes at significantly increased energy use and still does not guarantee error-free reception. In this paper we present BOOSTER, a fully decentralized epidemic-based system that boosts reception quality by cooperatively repairing lossy packet streams among the community of DVB viewers. To validate our system, we collected real data by deploying a set of DVB receivers geographically distributed in and around Amsterdam and Utrecht, The Netherlands. We implemented and tested our system in PeerSim, using our collected real trace information as input. We present in detail the crucial design decisions, the algorithms that underpin our system, the realistic experimental methodology, and extensive results that demonstrate the feasibility and efficiency of this approach. In particular, we conclude that the upload bandwidth required by each node for significant recovery of a real DVB broadcast is on the order of 5 KB/s when nodes allow up to a 2-second repair delay, a rather trivial bandwidth for today's typical ADSL connections with an acceptable introduced delay.
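    Cooperative stream repair can be sketched as a simple plan computation: within the repair window permitted by the delay budget, a node maps each packet it missed off-air to a peer that did receive it. This is an illustrative sketch only; the function name `repair_exchange` and the data layout are assumptions, and the real system spreads such requests epidemically rather than from a static peer list.

```python
def repair_exchange(mine, peers, window):
    # mine:   set of packet sequence numbers this node received off-air
    # peers:  list of (peer_id, set_of_received_seqnos) pairs
    # window: seqnos still within the allowed repair delay
    # Returns {missing_seqno: peer_id} for every repairable gap.
    plan = {}
    for seq in window - mine:
        for peer_id, have in peers:
            if seq in have:
                plan[seq] = peer_id  # first peer able to supply it
                break
    return plan
```

    Because receivers in different locations experience uncorrelated losses, the union of the community's received sets typically covers the window, which is why a few KB/s of upload per node suffices for significant recovery.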